A Methodology for Discovering how to Adaptively Personalize to Users using Experimental Comparisons
We explain and provide examples of a formalism that supports the methodology
of discovering how to adapt and personalize technology by combining randomized
experiments with variables associated with user models. We characterize a
formal relationship between the use of technology to conduct A/B experiments
and use of technology for adaptive personalization. The MOOClet Formalism [11]
captures the equivalence between experimentation and personalization in its
conceptualization of modular components of a technology. This motivates a
unified software design pattern that enables technology components that can be
compared in an experiment to also be adapted based on contextual data, or
personalized based on user characteristics. With the aid of a concrete use
case, we illustrate the potential of the MOOClet formalism for a methodology
that uses randomized experiments of alternative micro-designs to discover how
to adapt technology based on user characteristics, and then dynamically
implements these personalized improvements in real time.
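The unified design pattern described above, in which a single component supports both experimental comparison and personalization, can be sketched in a few lines. All class and variable names below are hypothetical illustrations, not the MOOClet platform's actual API:

```python
import random

class MOOCletComponent:
    """A modular technology component (names hypothetical) that can either
    randomize across versions (an A/B experiment) or pick a version via a
    policy over user-model variables (personalization), logging data either way."""

    def __init__(self, versions, policy=None):
        self.versions = versions   # e.g. alternative explanation texts, keyed by version id
        self.policy = policy       # optional: maps user variables -> version key
        self.log = []              # data collection is part of the pattern

    def serve(self, user_vars):
        if self.policy is not None:
            key = self.policy(user_vars)                 # adaptive / personalized choice
        else:
            key = random.choice(list(self.versions))     # uniform random experiment
        self.log.append((user_vars, key))
        return self.versions[key]

# The same component class supports both modes: first a uniform experiment,
# then a personalized policy discovered from the experimental data.
experiment = MOOCletComponent({"A": "hint text A", "B": "hint text B"})
experiment.serve({"prior_score": 0.4})

personalized = MOOCletComponent(
    {"A": "hint text A", "B": "hint text B"},
    policy=lambda u: "A" if u["prior_score"] < 0.5 else "B",
)
personalized.serve({"prior_score": 0.4})
```

The key design choice the formalism motivates is that swapping `policy` from `None` to a learned function is the only change needed to move from experimentation to personalization.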
Supporting Instructors in Collaborating with Researchers using MOOClets
Most education and workplace learning takes place in classroom contexts far
removed from laboratories or field sites with special arrangements for
scientific research. But digital online resources provide a novel opportunity
for large scale efforts to bridge the real world and laboratory settings which
support data collection and randomized A/B experiments comparing different
versions of content or interactions [2]. However, substantial technological
and practical barriers stand in the way of aligning instructors and researchers
to use learning technologies such as blended lessons/exercises and MOOCs as
both a service for students and a realistic context for research. This paper
explains how the concept of a MOOClet can facilitate research-practitioner
collaborations. MOOClets [3] are defined as modular components of a digital
resource that can be implemented in technology to: (1) allow modification to
create multiple versions, (2) allow experimental comparison and personalization
of different versions, (3) reliably specify what data are collected. We suggest
a framework in which instructors specify what kinds of changes to lessons,
exercises, and emails they would be willing to adopt, and what data they will
collect and make available. Researchers can then (1) specify or design
experiments that compare the effects of different versions on quantifiable
outcomes, and (2) explore algorithms for maximizing particular outcomes by
choosing alternative versions of a MOOClet based on the available input
variables. We present a prototype survey tool for instructors intended to
facilitate practitioner-researcher matches and successful collaborations.
Student Usage of Q&A Forums: Signs of Discomfort?
Q&A forums are widely used in large classes to provide scalable support. In
addition to offering students a space to ask questions, these forums aim to
create a community and promote engagement. Prior literature suggests that the
way students participate in Q&A forums varies and that most students do not
actively post questions or engage in discussions. Students may display
different participation behaviours depending on their comfort levels in the
class. This paper investigates students' use of a Q&A forum in a CS1 course. We
also analyze student opinions about the forum to explain the observed
behaviour, focusing on students' lack of visible participation (lurking,
anonymity, private posting). We analyzed forum data collected in a CS1 course
across two consecutive years and invited students to complete a survey about
perspectives on their forum usage. Despite a small cohort of highly engaged
students, we confirmed that most students do not actively read or post on the
forum. We discuss students' reasons for the low level of engagement and
barriers to participating visibly. Common reasons include fear of lacking
knowledge and of repercussions from being visible to the student community.
Comment: To be published at ITiCSE 202
Opportunities for Adaptive Experiments to Enable Continuous Improvement that Trades-off Instructor and Researcher Incentives
Randomized experimental comparisons of alternative pedagogical strategies
could provide useful empirical evidence in instructors' decision-making.
However, traditional experiments do not offer a clear and simple pathway for
rapidly using data to increase the chances that students in an experiment
receive the best conditions. Drawing inspiration from the use of machine
learning and experimentation in product development at leading technology
companies, we explore how adaptive experimentation might help in continuous
course improvement. In adaptive experiments, as different arms/conditions are
deployed to students, data is analyzed and used to change the experience for
future students. This can be done using machine learning algorithms to identify
which actions are more promising for improving student experience or outcomes.
This algorithm can then dynamically deploy the most effective conditions to
future students, resulting in better support for students' needs. We illustrate
the approach with a case study providing a side-by-side comparison of
traditional and adaptive experimentation of self-explanation prompts in online
homework problems in a CS1 course. This provides a first step in exploring how
this methodology can help bridge research and practice through continuous
course improvement.
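The adaptive loop described above, in which data from earlier students changes the experience of later ones, can be illustrated with a minimal Beta-Bernoulli Thompson sampling sketch. The arm names, effect sizes, and simulation below are hypothetical illustrations, not the paper's actual conditions or data:

```python
import random

def thompson_assign(successes, failures):
    """Draw a success-rate estimate for each arm from its Beta posterior
    and deploy the arm with the highest draw (Thompson sampling)."""
    draws = {arm: random.betavariate(successes[arm] + 1, failures[arm] + 1)
             for arm in successes}
    return max(draws, key=draws.get)

# Two hypothetical prompt conditions; outcomes are binary (e.g. solved or not).
successes = {"prompt": 0, "no_prompt": 0}
failures = {"prompt": 0, "no_prompt": 0}

def true_rate(arm):
    # Illustrative effect sizes only, not real results.
    return 0.7 if arm == "prompt" else 0.5

for _ in range(500):                       # each iteration is one simulated student
    arm = thompson_assign(successes, failures)
    if random.random() < true_rate(arm):   # simulate the student's outcome
        successes[arm] += 1
    else:
        failures[arm] += 1
```

Because the posterior for the weaker arm concentrates below that of the stronger arm as data accumulate, later simulated students are assigned the more promising condition with increasing probability.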
Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning
We design and implement an adaptive experiment (a ``contextual bandit'') to
learn a targeted treatment assignment policy, where the goal is to use a
participant's survey responses to determine which charity to expose them to in
a donation solicitation. The design balances two competing objectives:
optimizing the outcomes for the subjects in the experiment (``cumulative regret
minimization'') and gathering data that will be most useful for policy
learning, that is, for learning an assignment rule that will maximize welfare
if used after the experiment (``simple regret minimization''). We evaluate
alternative experimental designs by collecting pilot data and then conducting a
simulation study. Next, we implement our selected algorithm. Finally, we
perform a second simulation study anchored to the collected data that evaluates
the benefits of the algorithm we chose. Our first result is that the value of a
learned policy in this setting is higher when data are collected via uniform
randomization rather than adaptively using standard cumulative regret
minimization or policy learning algorithms. We propose a simple heuristic for
adaptive experimentation that improves upon uniform randomization from the
perspective of policy learning at the expense of increasing cumulative regret
relative to alternative bandit algorithms. The heuristic modifies an existing
contextual bandit algorithm by (i) imposing a lower bound on assignment
probabilities that decays slowly, so that no arm is discarded too quickly, and
(ii) after adaptively collecting data, restricting policy learning to select
from arms for which sufficient data has been gathered.
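Modification (i) can be sketched as a simple probability floor applied before each assignment. The decay schedule and constant below are illustrative assumptions, not the paper's exact algorithm:

```python
import math

def floor_probabilities(raw_probs, t, c=0.2):
    """Clip assignment probabilities to a floor that decays slowly with the
    number of participants t, so no arm's probability reaches zero too early.
    The schedule c / (k * sqrt(t)) is an illustrative choice, not the
    authors' exact specification."""
    k = len(raw_probs)
    floor = c / (k * math.sqrt(max(t, 1)))
    clipped = [max(p, floor) for p in raw_probs]
    total = sum(clipped)
    return [p / total for p in clipped]    # renormalize so probabilities sum to 1

# Even an arm the bandit has written off retains a small assignment probability.
probs = floor_probabilities([0.95, 0.05, 0.0], t=100)
```

Keeping every arm's probability bounded away from zero preserves the overlap that post-experiment policy learning needs, at the cost of some within-experiment regret.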
Getting too personal(ized): The importance of feature choice in online adaptive algorithms
Digital educational technologies offer the potential to customize students'
experiences and learn what works for which students, enhancing the technology
as more students interact with it. We consider whether and when attempting to
discover how to personalize has a cost, for example when adapting to personal
information delays the adoption of policies that benefit all students. We
explore these issues in the context of using multi-armed bandit (MAB)
algorithms to learn a policy for what version of an educational technology to
present to each student, varying the relation between student characteristics
and outcomes and also whether the algorithm is aware of these characteristics.
Through simulations, we demonstrate that the inclusion of student
characteristics for personalization can be beneficial when those
characteristics are needed to learn the optimal action. In other scenarios,
this inclusion decreases performance of the bandit algorithm. Moreover,
including unneeded student characteristics can systematically disadvantage
students with less common values for these characteristics. Our simulations do
however suggest that real-time personalization will be helpful in particular
real-world scenarios, and we illustrate this through case studies using
existing experimental results in ASSISTments. Overall, our simulations show
that adaptive personalization in educational technologies can be a double-edged
sword: real-time adaptation improves student experiences in some contexts, but
the slower adaptation and potentially discriminatory results mean that a more
personalized model is not always beneficial.
Comment: 11 pages, 6 figures. Correction to the original article published at
https://files.eric.ed.gov/fulltext/ED607907.pdf : the Thompson sampling
algorithm in the original article overweights older data, resulting in an
over-exploitative multi-armed bandit. This arXiv version uses a normal
Thompson sampling algorithm.
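A standard Thompson sampling update with a normal (Gaussian) reward model, which weights all past observations equally rather than overweighting older data, might look like the following sketch. The priors and the known-unit-variance assumption are illustrative choices, not the article's exact implementation:

```python
import random

class GaussianThompsonArm:
    """One arm with a Normal prior over its mean reward. The conjugate
    posterior update weights every observation equally (reward noise
    variance is assumed known and equal to 1 for simplicity)."""

    def __init__(self, prior_mean=0.0, prior_var=1.0):
        self.mean = prior_mean
        self.var = prior_var

    def sample(self):
        # Draw a plausible mean reward from the current posterior.
        return random.gauss(self.mean, self.var ** 0.5)

    def update(self, reward):
        # Conjugate Normal-Normal update with observation variance 1:
        # posterior precision is the sum of prior and observation precisions.
        precision = 1.0 / self.var + 1.0
        self.mean = (self.mean / self.var + reward) / precision
        self.var = 1.0 / precision

# Assign by sampling each arm's posterior and deploying the highest draw.
arms = {"A": GaussianThompsonArm(), "B": GaussianThompsonArm()}
chosen = max(arms, key=lambda a: arms[a].sample())
arms[chosen].update(reward=1.0)
```

Because the posterior variance shrinks monotonically as observations accumulate, no reweighting scheme can cause older data to dominate newer data.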
Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception
Personalized chatbot-based teaching assistants can be crucial in addressing
increasing classroom sizes, especially where direct teacher presence is
limited. Large language models (LLMs) offer a promising avenue, with increasing
research exploring their educational utility. However, the challenge lies not
only in establishing the efficacy of LLMs but also in discerning the nuances of
interaction between learners and these models, which impact learners'
engagement and results. We conducted a formative study in an undergraduate
computer science classroom (N=145) and a controlled experiment on Prolific
(N=356) to explore the impact of four pedagogically informed guidance
strategies and the interaction between student approaches and LLM responses.
Direct LLM answers marginally improved performance, while refining student
solutions fostered trust. Our findings suggest a nuanced relationship between
the guidance provided and the LLM's role in either answering or refining student
input. Based on our findings, we provide design recommendations for optimizing
learner-LLM interactions.
Nonprofessional Peer Support to Improve Mental Health: Randomized Trial of a Scalable Web-Based Peer Counseling Course
Background: Millions of people worldwide are underserved by the mental health care system. Indeed, most mental health problems go untreated, often because of resource constraints (eg, limited provider availability and cost) or lack of interest or faith in professional help. Furthermore, subclinical symptoms and chronic stress in the absence of a mental illness diagnosis often go unaddressed, despite their substantial health impact. Innovative and scalable treatment delivery methods are needed to supplement traditional therapies to fill these gaps in the mental health care system.
Objective: This study aims to investigate whether a self-guided web-based course can teach pairs of nonprofessional peers to deliver psychological support to each other.
Methods: In this experimental study, a community sample of 30 dyads (60 participants, mostly friends), many of whom presented with mild to moderate psychological distress, were recruited to complete a web-based counseling skills course. Dyads were randomized to either immediate or delayed access to training. Before and after training, dyads were recorded taking turns discussing stressors. Participants’ skills in the helper role were assessed before and after taking the course: the first author and a team of trained research assistants coded recordings for the presence of specific counseling behaviors. When in the client role, participants rated the session on helpfulness in resolving their stressors and supportiveness of their peers. We hypothesized that participants would increase the use of skills taught by the course and decrease the use of skills discouraged by the course, would increase their overall adherence to the guidelines taught in the course, and would perceive posttraining counseling sessions as more helpful and their peers as more supportive.
Results: The course had large effects on most helper-role speech behaviors: helpers decreased total speaking time, used more restatements, made fewer efforts to influence the speaker, and decreased self-focused and off-topic utterances (ds=0.8-1.6). When rating the portion of the session in which they served as clients, participants indicated that they made more progress in addressing their stressors during posttraining counseling sessions compared with pretraining sessions (d=1.1), but they did not report substantive changes in feelings of closeness and supportiveness of their peers (d=0.3).
Conclusions: The results provide proof of concept that nonprofessionals can learn basic counseling skills from a scalable web-based course. The course serves as a promising model for the development of web-based counseling skills training, which could provide accessible mental health support to some of those underserved by traditional psychotherapy.